Support for smp in multiple DMs #910
Conversation
I'm sorry, but my understanding is that the SMP support in RISC-V is not designed to work across multiple Debug Modules, and I am afraid that this simple change won't make it work. (@timsifive please correct me if I am mistaken.) In any case, if harts are managed by multiple different DMs, it is not even possible to accomplish the near-instantaneous halt & resume, which is one of the main features of SMP debugging. So this is not just an OpenOCD limitation, it is also a limitation of the given HW.

Thank you for your reply. This is not a perfect solution, because it can only accomplish instantaneous halt & resume for the harts on each cluster through the hart window. But even if the external triggers in the Debug Spec 1.0 allow the hardware to halt/resume simultaneously, this problem will still exist. So my idea is either to have OpenOCD report an error when the user configures cross-cluster smp (in another patch), or to support a similarly imperfect cross-cluster smp implementation (in this patch).
I think you can pass `-coreid` to `target create` commands, which means you don't need this change. Does that not work?
```diff
@@ -2027,6 +2026,10 @@ static int examine(struct target *target)
 		return ERROR_FAIL;
 	}
 
+	/* The RISC-V hartid is sequential, and the index of each hart
+	 * on the Debug Module should start at 0 and be contiguous. */
+	info->index = target->coreid % dm->hart_count;
```
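For illustration only, not part of the patch: a standalone sketch of the mapping this line computes, under the assumption of a uniform layout (here 2 DMs with 2 harts each, coreid assigned sequentially across them):

```c
#include <stdio.h>

int main(void)
{
	/* Assumed layout: 2 DMs, 2 harts each, coreid 0..3 assigned in order. */
	const int harts_per_dm = 2;

	for (int coreid = 0; coreid < 4; coreid++)
		printf("coreid %d -> DM %d, index %d\n",
		       coreid, coreid / harts_per_dm, coreid % harts_per_dm);

	return 0;
}
```

Only when every DM has the same hart count does the modulo line up with the per-DM numbering.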
Doesn't this assume that each DM has the same number of harts?
Thanks for the reminder; I didn't consider the case where each DM has a different number of harts. If that happens, user configuration becomes unfriendly. I currently have two ideas to solve it:
Idea 1: Config modification

A config like:

```tcl
# cluster 0
target create $_TARGETNAME_0 riscv -chain-position $_CHIPNAME.cpu -coreid 0 -index 0 -rtos hwthread
target create $_TARGETNAME_1 riscv -chain-position $_CHIPNAME.cpu -coreid 1 -index 1
# cluster 1
target create $_TARGETNAME_2 riscv -chain-position $_CHIPNAME.cpu -coreid 2 -index 0 -dbgbase 0x400 -rtos hwthread
target create $_TARGETNAME_3 riscv -chain-position $_CHIPNAME.cpu -coreid 3 -index 1 -dbgbase 0x400

target smp $_TARGETNAME_0 $_TARGETNAME_1 $_TARGETNAME_2 $_TARGETNAME_3
```
Modify the `jim_target_create` function to parse the new `-index` option into `target->index`. Then the `info->index` assignment at
https://github.com/riscv/riscv-openocd/blob/699eecaab434337dc3915171606b0548c48c6d51/src/target/riscv/riscv-013.c#L1900
is changed to:

```c
info->index = target->index;
```
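To make idea 1 concrete, here is a minimal, self-contained sketch (struct and field names are illustrative stand-ins, not OpenOCD's actual code) of what an explicit `-index` buys: the per-DM hart index comes straight from the config, so the mapping keeps working even when the DMs have different numbers of harts:

```c
#include <stdio.h>

/* Stand-in for the relevant target fields; in OpenOCD, "index" would be
 * a new field filled in by jim_target_create() from the -index option. */
struct fake_target {
	int coreid;       /* globally unique, also feeds the gdb threadid */
	int index;        /* hart index within this target's DM */
	unsigned dbgbase; /* which DM the target belongs to */
};

int main(void)
{
	/* Mirrors the four "target create" lines above. */
	struct fake_target targets[] = {
		{ .coreid = 0, .index = 0, .dbgbase = 0x0 },
		{ .coreid = 1, .index = 1, .dbgbase = 0x0 },
		{ .coreid = 2, .index = 0, .dbgbase = 0x400 },
		{ .coreid = 3, .index = 1, .dbgbase = 0x400 },
	};

	for (size_t i = 0; i < sizeof(targets) / sizeof(targets[0]); i++)
		printf("coreid %d -> DM @ dbgbase 0x%x, hartsel %d\n",
		       targets[i].coreid, targets[i].dbgbase, targets[i].index);

	return 0;
}
```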
Idea 2: Get different threadids through hartid

A config like:

```tcl
# cluster 0
target create $_TARGETNAME_0 riscv -chain-position $_CHIPNAME.cpu -coreid 0 -rtos hwthread
target create $_TARGETNAME_1 riscv -chain-position $_CHIPNAME.cpu -coreid 1
# cluster 1
target create $_TARGETNAME_2 riscv -chain-position $_CHIPNAME.cpu -coreid 0 -dbgbase 0x400 -rtos hwthread
target create $_TARGETNAME_3 riscv -chain-position $_CHIPNAME.cpu -coreid 1 -dbgbase 0x400

target smp $_TARGETNAME_0 $_TARGETNAME_1 $_TARGETNAME_2 $_TARGETNAME_3
```
Make appropriate modifications to the `threadid_from_target` function; I still don't know exactly how to modify it:

```c
static inline threadid_t threadid_from_target(const struct target *target)
{
	/* return target->coreid + 1; */
	...
}
```

The thread ids should be 0/1/2/3, i.e. unique across all four targets even though coreid repeats across the clusters.
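One possible direction, sketched with stand-in types rather than OpenOCD's actual ones: number the targets by their position in the target list instead of by coreid, so the gdb threadid stays unique even when coreid repeats across clusters. The `fake_target` struct and hand-built list below are assumptions for illustration only:

```c
#include <stdio.h>

typedef long long threadid_t;

/* Stand-in for OpenOCD's target struct and its linked target list. */
struct fake_target {
	int coreid;               /* repeats across clusters in idea 2 */
	unsigned dbgbase;         /* identifies the DM */
	struct fake_target *next;
};

/* Assign the threadid from the target's position in the list, not
 * from coreid, so duplicates across DMs cannot collide. */
static threadid_t threadid_from_target(struct fake_target *all,
				       const struct fake_target *target)
{
	threadid_t id = 1;        /* the original "+ 1" suggests ids start at 1 */
	for (struct fake_target *t = all; t; t = t->next, id++)
		if (t == target)
			break;
	return id;
}

int main(void)
{
	/* Mirrors the idea-2 config: coreid 0/1 in each of two clusters. */
	struct fake_target t3 = { 1, 0x400, NULL };
	struct fake_target t2 = { 0, 0x400, &t3 };
	struct fake_target t1 = { 1, 0x0, &t2 };
	struct fake_target t0 = { 0, 0x0, &t1 };

	for (struct fake_target *t = &t0; t; t = t->next)
		printf("coreid %d @ dbgbase 0x%x -> threadid %lld\n",
		       t->coreid, t->dbgbase, threadid_from_target(&t0, t));
	return 0;
}
```

The ids come out 1/2/3/4 rather than 0/1/2/3, matching the original `+ 1`; the essential property is only that they no longer collide across DMs.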
This patch is about Support for smp in multiple DMs: #907. Cross-cluster smp cannot be realized just by passing `-coreid`.
That's not obvious to me. What happens when you pass coreid straight through without this change, e.g. coreid 0/1 for cluster 0 and 2/3 for cluster 1?
`info->index` stores the index of each hart on its DM, and for each DM these must start at 0 and be contiguous. So if coreid is passed straight through, then to select the first hart on cluster 1 (DM 1) I would write 2 to dmcontrol.hartsel, which won't work: the first hart in a DM must be selected by dmcontrol.hartsel=0, the second by dmcontrol.hartsel=1, and so on.
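A sketch of the selection rule described above, with assumed names rather than OpenOCD internals: a global coreid must be resolved to a (DM, local index) pair before dmcontrol.hartsel is written, and the local index restarts at 0 in every DM:

```c
#include <stdio.h>

struct fake_dm {
	unsigned dbgbase; /* where the DM sits on the debug bus */
	int hart_count;   /* harts managed by this DM */
};

/* Walk the DMs, peeling off each DM's harts until the coreid lands
 * inside one of them; what remains is the local hartsel value. */
static void select_hart(const struct fake_dm *dms, int ndm, int coreid)
{
	for (int i = 0; i < ndm; i++) {
		if (coreid < dms[i].hart_count) {
			printf("DM @ dbgbase 0x%x: dmcontrol.hartsel = %d\n",
			       dms[i].dbgbase, coreid);
			return;
		}
		coreid -= dms[i].hart_count;
	}
	printf("coreid out of range\n");
}

int main(void)
{
	const struct fake_dm dms[] = { { 0x0, 2 }, { 0x400, 2 } };

	/* First hart of cluster 1 is global coreid 2, yet it must be
	 * selected with hartsel=0 in the DM at 0x400, not hartsel=2. */
	select_hart(dms, 2, 2);
	return 0;
}
```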
OK, so what happens if you put the harts of each DM into their own smp group?
If I only smp the harts within a single DM, two gdb instances are used to debug and that works, but then the harts in different DMs cannot be halted/resumed together as one smp group. I'd like some advice about this.
Hmm... the comment for target.core_id says "which device on the TAP?". Is that fixable? That seems like a better fix than RISC-V-specific heuristics.
As I understand it, target->coreid is used both to specify which device is on the TAP and to define a unique threadid (for gdb) for each target. If I smp all targets across multiple clusters, there is a bug.
Thanks, we will try to fix it in upstream OpenOCD.
There is an issue about Support for smp in multiple DMs: #907.

When I debug 2 DMs, each with 2 harts, my config file is like the cluster examples above, and it works with this code:

```c
info->index = target->coreid % dm->hart_count;
```

dm->hart_count was 4, so when I write 4/5 to coreid, info->index becomes 0/1. For each Debug Module, the index of each hart can then start at 0 and be contiguous.